ABSTRACT
Our work aims to generate new ideas to explore in a specific domain using generative language models. For example, doctors can provide known symptoms as cues to the system, and the system then generates ideas based on those cues. Similar scenarios can be envisioned for other scientific domains. We used transformer-based decoders, specifically GPT-3-style transformer decoders, as the language models and generators. As data, we used the COVID-19 Open Research Dataset [18]. We fine-tuned the GPT-NEO-125M and GPT-NEO-1.3B models, with 125 million and 1.3 billion parameters, respectively. The latter model generated more coherent text and was better at linking ideas relevant to the same problem. We report our findings here with examples generated from our fine-tuned models. © 2022 ACM.